
    The interrelation between data and AI ethics in the context of impact assessments

    In the growing literature on artificial intelligence (AI) impact assessments, the literature on data protection impact assessments is heavily referenced. Given the relative maturity of the data protection debate, and that it has translated into legal codification, it is indeed a natural place to start for AI. In this article, we anticipate directions in what we believe will become a dominant and impactful forthcoming debate, namely, how to conceptualise the relationship between data protection and AI impact. We begin by discussing the value canvas, i.e. the ethical principles that underpin data and AI ethics, and how these are instantiated in value trade-offs when the ethics are applied. Following this, we map three kinds of relationships that can be envisioned between data and AI ethics, and close with a discussion of the asymmetry in value trade-offs where privacy and fairness are concerned.

    Algorithms in future capital markets: A survey on AI, ML and associated algorithms in capital markets

    This paper reviews Artificial Intelligence (AI), Machine Learning (ML) and associated algorithms in future Capital Markets. New AI algorithms are constantly emerging, with each 'strain' mimicking a new form of human learning, reasoning, knowledge, and decision-making. The main disruptive forms of learning currently include Deep Learning, Adversarial Learning, and Transfer and Meta Learning. Although these modes of learning have been in the AI/ML field for more than a decade, they are now more applicable due to the availability of data, computing power and infrastructure. These forms of learning have produced new models (e.g., Long Short-Term Memory, Generative Adversarial Networks) and enabled important applications (e.g., Natural Language Processing, Adversarial Examples, Deep Fakes). These new models and applications will drive changes in future Capital Markets, so it is important to understand their computational strengths and weaknesses. Since ML algorithms effectively self-program and evolve dynamically, financial institutions and regulators are increasingly concerned with ensuring there remains a modicum of human control, focusing on Algorithmic Interpretability/Explainability, Robustness and Legality. For example, one concern is that, in the future, an ecology of trading algorithms across different institutions may 'conspire' and become unintentionally fraudulent (cf. LIBOR) or subject to subversion through compromised datasets (e.g. Microsoft Tay). New and unique forms of systemic risk can emerge, potentially arising from excessive algorithmic complexity. The contribution of this paper is to review AI, ML and associated algorithms, their computational strengths and weaknesses, and their likely future impact on the Capital Markets.

    Generative adversarial networks for financial trading strategies fine-tuning and combination

    Systematic trading strategies are algorithmic procedures that allocate assets with the aim of optimising a certain performance criterion. To obtain an edge in a highly competitive environment, an analyst needs to appropriately fine-tune their strategy, or discover how to combine weak signals in novel, alpha-creating ways. Both aspects, fine-tuning and combination, have been extensively researched using several methods, but emerging techniques such as Generative Adversarial Networks can have an impact on both. Our work therefore proposes the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation. To this end, we provide a full methodology on: (i) the training and selection of a cGAN for time series data; (ii) how each generated sample is used for strategy calibration; and (iii) how all generated samples can be used for ensemble modelling. To provide evidence that our approach is well grounded, we designed an experiment with multiple trading strategies, encompassing 579 assets. We compared cGAN with an ensemble scheme and model validation methods, both suited to time series. Our results suggest that cGANs are a suitable alternative for strategy calibration and combination, providing outperformance where traditional techniques fail to generate any alpha.
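    The calibration idea in step (ii) can be illustrated with a minimal sketch. The generator below is only a stand-in (a bootstrap resampler), not the paper's cGAN, and the momentum rule, grid values and function names are hypothetical; the point is the pattern of choosing hyper-parameters by averaging a performance criterion over many generated paths rather than over the single historical one.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def sample_paths(history, n_samples, horizon=60):
        """Stand-in for a trained cGAN generator G(z | conditioning window):
        here we simply bootstrap-resample historical returns. In the paper's
        setting, draws from the fitted cGAN would be used instead."""
        return rng.choice(history, size=(n_samples, horizon), replace=True)

    def sharpe(rets):
        """Simple (unannualised) Sharpe ratio of a return series."""
        return rets.mean() / (rets.std() + 1e-9)

    def strategy_returns(rets, lookback):
        """Toy momentum rule: hold +1 when the trailing mean return is
        positive, -1 otherwise; PnL is the signal times the return."""
        sig = np.zeros_like(rets)
        for t in range(lookback, len(rets)):
            sig[t] = np.sign(rets[t - lookback:t].mean())
        return sig * rets

    def calibrate(history, grid=(5, 10, 20), n_samples=100):
        """Step (ii): pick the hyper-parameter that maximises the average
        Sharpe ratio over the generated samples, reducing the risk of
        overfitting the single observed path."""
        paths = sample_paths(history, n_samples)
        scores = {lb: np.mean([sharpe(strategy_returns(p, lb)) for p in paths])
                  for lb in grid}
        return max(scores, key=scores.get)
    ```

    Step (iii) follows the same pattern: instead of selecting one hyper-parameter, each generated sample calibrates its own strategy and the resulting signals are averaged into an ensemble.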

    Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms

    Algorithms are becoming ubiquitous. However, companies are increasingly alarmed about their algorithms causing major financial or reputational damage. A new industry is envisaged: auditing and assurance of algorithms, with the remit to validate artificial intelligence, machine learning, and associated algorithms.

    DetectA: abrupt concept drift detection in non-stationary environments

    Almost all drift detection mechanisms designed for classification problems work reactively: after receiving the complete data set (input patterns and class labels), they apply a sequence of procedures to identify some change in the class-conditional distribution – a concept drift. However, detecting changes only after they occur can in some situations be harmful to the process under analysis. This paper proposes a proactive approach for abrupt drift detection, called DetectA (Detect Abrupt Drift). Briefly, the method comprises three steps: (i) label the patterns from the test set (an unlabelled data block) using an unsupervised method; (ii) compute statistics from the train and test sets, conditioned on the class labels (given for the train set, predicted for the test set); and (iii) compare the train and test statistics using a multivariate hypothesis test. Based on the results of the hypothesis tests, we attempt to detect drift on the test set before the real labels are obtained. A procedure for creating datasets with abrupt drift is also proposed, to support a sensitivity analysis of the DetectA model. The sensitivity analysis suggests that the detector is efficient and suitable for high-dimensional datasets, blocks with any proportion of drift, and datasets with class imbalance. The performance of the DetectA method, under different configurations, was also evaluated on real and artificial datasets using an MLP as the classifier. The best results were obtained with one of the detection methods, the proactive approach being a strong contender for improving the underlying base classifier's accuracy.
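    The three steps above can be sketched in a few lines. This is not the authors' implementation: the nearest-centroid labelling stands in for the unsupervised step, the fixed threshold stands in for a proper F-distribution critical value, and all function names are hypothetical. It only illustrates labelling the unlabelled block, computing class-conditional statistics, and comparing them with a multivariate (Hotelling's T²) test.

    ```python
    import numpy as np

    def nearest_centroid_labels(X_test, X_train, y_train):
        """Step (i): label the unlabelled test block by assigning each
        pattern to the nearest class centroid of the training set
        (a stand-in for the paper's unsupervised labelling)."""
        classes = np.unique(y_train)
        centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
        dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
        return classes[np.argmin(dists, axis=1)]

    def hotelling_t2(A, B):
        """Step (iii): two-sample Hotelling's T^2 statistic comparing
        the mean vectors of two samples with pooled covariance."""
        n1, n2 = len(A), len(B)
        diff = A.mean(axis=0) - B.mean(axis=0)
        S = ((n1 - 1) * np.cov(A, rowvar=False) +
             (n2 - 1) * np.cov(B, rowvar=False)) / (n1 + n2 - 2)
        S += 1e-6 * np.eye(S.shape[0])  # regularise for numerical stability
        return (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)

    def detect_abrupt_drift(X_train, y_train, X_test, threshold=25.0):
        """Flag drift if any class's test statistics diverge from the
        training statistics. A real implementation would derive the
        threshold from the F-distribution rather than fix it."""
        y_hat = nearest_centroid_labels(X_test, X_train, y_train)   # step (i)
        for c in np.unique(y_train):
            A, B = X_train[y_train == c], X_test[y_hat == c]        # step (ii)
            if len(B) > X_train.shape[1] + 1:
                if hotelling_t2(A, B) > threshold:                  # step (iii)
                    return True
        return False
    ```

    Because the test uses predicted rather than true labels, the drift can be flagged proactively, before the real labels of the block arrive.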

    Neuroevolutionary learning in nonstationary environments

    This work presents a new neuro-evolutionary model, called NEVE (Neuroevolutionary Ensemble), based on an ensemble of Multi-Layer Perceptron (MLP) neural networks for learning in nonstationary environments. NEVE makes use of quantum-inspired evolutionary models to automatically configure the ensemble members and combine their outputs. The quantum-inspired evolutionary models identify the most appropriate topology for each MLP network, select the most relevant input variables, determine the neural network weights and calculate the voting weight of each ensemble member. Four different variants of NEVE are developed, varying the mechanism for detecting and treating concept drifts, including proactive drift detection approaches. The proposed models were evaluated on real and artificial datasets, comparing the results with those of established models in the literature. The results show that the accuracy of NEVE is higher in most cases and that the best configurations are obtained using some mechanism for drift detection. These results reinforce that the neuroevolutionary ensemble approach is a robust choice for situations in which the datasets are subject to sudden changes in behaviour.

    QuantNet: transferring learning across trading strategies

    Systematic financial trading strategies account for over 80% of trade volume in equities and a large chunk of the foreign exchange market. In spite of the availability of data from multiple markets, current approaches to trading rely mainly on learning strategies per individual market. In this paper, we take a step towards developing fully end-to-end global trading strategies that leverage systematic trends to produce superior market-specific trading strategies. We introduce QuantNet: an architecture that learns market-agnostic trends and uses these to learn superior market-specific trading strategies. Each market-specific model is composed of an encoder-decoder pair. The encoder transforms market-specific data into an abstract latent representation that is processed by a global model shared by all markets, while the decoder learns a market-specific trading strategy based on both local and global information from the market-specific encoder and the global model. QuantNet uses recent advances in transfer and meta-learning, where market-specific parameters are free to specialise on the problem at hand, whilst market-agnostic parameters are driven to capture signals from all markets. By integrating over idiosyncratic market data, we can learn general transferable dynamics, avoiding overfitting and producing strategies with superior returns. We evaluate QuantNet on historical data across 3103 assets in 58 global equity markets. Against the top-performing baseline, QuantNet yielded 51% higher Sharpe and 69% higher Calmar ratios. In addition, we show the benefits of our approach over the non-transfer-learning variant, with improvements of 15% and 41% in Sharpe and Calmar ratios respectively. A link to the QuantNet code is provided in the appendix.
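    The weight-sharing pattern described above (market-specific encoder and decoder around a globally shared block) can be sketched as follows. This is a structural sketch only, not QuantNet itself: the layer sizes, market names and class names are invented, there is no recurrence, and the gradient-based training that drives the shared parameters to capture cross-market signals is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def linear(n_in, n_out):
        """Randomly initialised weight matrix for a dense layer."""
        return rng.normal(scale=n_in ** -0.5, size=(n_in, n_out))

    class MarketModel:
        """Encoder -> shared global block -> decoder for one market.
        Encoder/decoder weights are market-specific, while W_global is
        the same array object shared (by reference) across all markets."""
        def __init__(self, n_features, n_latent, W_global):
            self.W_enc = linear(n_features, n_latent)   # market-specific
            self.W_global = W_global                    # market-agnostic, shared
            self.W_dec = linear(2 * n_latent, 1)        # market-specific head

        def position(self, x):
            z = np.tanh(x @ self.W_enc)                 # abstract latent code
            g = np.tanh(z @ self.W_global)              # global trend signal
            # the decoder sees both local (z) and global (g) information
            return np.tanh(np.concatenate([z, g]) @ self.W_dec)

    # One shared global block; each market wraps it with its own encoder/decoder.
    W_global = linear(16, 16)
    markets = {m: MarketModel(8, 16, W_global)
               for m in ("FTSE100", "SP500", "NIKKEI225")}
    ```

    During training, gradients from every market would flow into `W_global`, so it is pushed to capture signals common to all markets, while each encoder/decoder remains free to specialise.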

    Overview and commentary of the CDEI's extended roadmap to an effective AI assurance ecosystem

    In recent years, the field of ethical artificial intelligence (AI), or AI ethics, has gained traction, aiming to develop guidelines and best practices for the responsible and ethical use of AI across sectors. As part of this, nations have proposed AI strategies, with the UK releasing both national AI and data strategies, as well as a transparency standard. Extending these efforts, the Centre for Data Ethics and Innovation (CDEI) has published an AI Assurance Roadmap, the first of its kind, which provides guidance on how to manage the risks that come from the use of AI. In this article, we provide an overview of the document's vision for a “mature AI assurance ecosystem” and of how the CDEI will work with other organizations on the development of regulation, industry standards, and the creation of AI assurance practitioners. We also provide a commentary on some key themes identified in the CDEI's roadmap, relating to (i) the complexities of building “justified trust”, (ii) the role of research in AI assurance, (iii) current developments in the AI assurance industry, and (iv) convergence with international regulation.

    Innovation and opportunity: review of the UK’s national AI strategy

    The publication of the UK’s National Artificial Intelligence (AI) Strategy represents a step-change in the national industrial, policy, regulatory, and geo-strategic agenda. Although there is a multiplicity of threads to explore, this text can be read primarily as a ‘signalling’ document. Indeed, we read the National AI Strategy as a vision for innovation and opportunity, underpinned by a trust framework that keeps innovation and opportunity at the forefront. We provide an overview of the structure of the document and offer commentary on various standout points. Our main takeaways are as follows. Innovation First: a clear signal that innovation is at the forefront of the UK’s data priorities. Alternative Ecosystem of Trust: whether the UK’s regulatory-market norms become the preferred ecosystem depends on the regulatory system and delivery frameworks put in place. Defence, Security and Risk: security and risk are discussed in terms of the utilisation and governance of AI. Revision of Data Protection: the signal is that the UK is seeking to position itself as less stringent regarding data protection and the necessary documentation. EU Disalignment (Atlanticism?): questions are raised about a step back in terms of data protection rights. We conclude with further notes on data flow continuity, the feasibility of a sector approach to regulation, legal liability, and the lack of a method of engagement for stakeholders. Whilst the strategy sends important signals for innovation, achieving ethical innovation is a harder challenge and will require a carefully evolved framework built with appropriate expertise.

    Algorithms: Law and Regulations

    The legal status of AI and algorithms continues to be debated. Resume-sifting algorithms exhibit unethical, discriminatory, and illegal behavior; crime-sentencing algorithms are unable to justify their decisions; and autonomous vehicles' predictive analytics software will make life-and-death decisions.